Exploration server on music in Saarland

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information is therefore not validated.

MPEG-4: Audio/video and synthetic graphics/audio for mixed media

Internal identifier: 000F28 (Main/Exploration); previous: 000F27; next: 000F29

Authors: Peter K. Doenges [United States]; Tolga K. Capin [Switzerland]; Fabio Lavagetto [Italy]; Joern Ostermann [United States]; Igor S. Pandzic [Switzerland]; Eric D. Petajan [United States]

Source:

RBID: ISTEX:77CDD5A141A63B1374B7934C9ADC8CF8B759C417

English descriptors

Abstract

MPEG-4 addresses coding of digital hybrids of natural and synthetic, aural and visual (A/V) information. The objective of this synthetic/natural hybrid coding (SNHC) is to facilitate content-based manipulation, interoperability, and wider user access in the delivery of animated mixed media. SNHC will support non-real-time and passive media delivery, as well as more interactive, real-time applications. Integrated spatial-temporal coding is sought for audio, video, and 2D/3D computer graphics as standardized A/V objects. Targets of standardization include mesh-segmented video coding, compression of geometry, synchronization between A/V objects, multiplexing of streamed A/V objects, and spatial-temporal integration of mixed media types. Composition, interactivity, and scripting of A/V objects can thus be supported in client terminals, as well as in content production for servers, also more effectively enabling terminals as servers. Such A/V objects can exhibit high efficiency in transmission and storage, plus content-based interactivity, spatial-temporal scalability, and combinations of transient dynamic data and persistent downloaded data. This approach can lower bandwidth of mixed media, offer tradeoffs in quality versus update for specific terminals, and foster varied distribution methods for content that exploit spatial and temporal coherence over buses and networks. MPEG-4 responds to trends at home and work to move beyond the paradigm of audio/video as a passive experience to more flexible A/V objects which combine audio/video with synthetic 2D/3D graphics and audio.

URL:
DOI: 10.1016/S0923-5965(97)00007-6
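
As a quick illustration (not part of the record itself), both identifiers can be resolved from the command line. The DOI above and the ISTEX full-text URL come from the XML record below; access to the ISTEX API normally requires institutional rights, and the output filenames here are arbitrary.

# Resolve the DOI to the publisher's landing page (follows redirects)
curl -L -o landing.html "https://doi.org/10.1016/S0923-5965(97)00007-6"

# Retrieve the full text through the ISTEX API (access rights required)
curl -L -o fulltext.pdf "https://api.istex.fr/document/77CDD5A141A63B1374B7934C9ADC8CF8B759C417/fulltext/pdf"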


Affiliations:


Links to previous steps (curation, corpus...)


The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title>MPEG-4: Audio/video and synthetic graphics/audio for mixed media</title>
<author>
<name sortKey="Doenges, Peter K" sort="Doenges, Peter K" uniqKey="Doenges P" first="Peter K." last="Doenges">Peter K. Doenges</name>
</author>
<author>
<name sortKey="Capin, Tolga K" sort="Capin, Tolga K" uniqKey="Capin T" first="Tolga K." last="Capin">Tolga K. Capin</name>
</author>
<author>
<name sortKey="Lavagetto, Fabio" sort="Lavagetto, Fabio" uniqKey="Lavagetto F" first="Fabio" last="Lavagetto">Fabio Lavagetto</name>
</author>
<author>
<name sortKey="Ostermann, Joern" sort="Ostermann, Joern" uniqKey="Ostermann J" first="Joern" last="Ostermann">Joern Ostermann</name>
</author>
<author>
<name sortKey="Pandzic, Igor S" sort="Pandzic, Igor S" uniqKey="Pandzic I" first="Igor S." last="Pandzic">Igor S. Pandzic</name>
</author>
<author>
<name sortKey="Petajan, Eric D" sort="Petajan, Eric D" uniqKey="Petajan E" first="Eric D." last="Petajan">Eric D. Petajan</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:77CDD5A141A63B1374B7934C9ADC8CF8B759C417</idno>
<date when="1997" year="1997">1997</date>
<idno type="doi">10.1016/S0923-5965(97)00007-6</idno>
<idno type="url">https://api.istex.fr/document/77CDD5A141A63B1374B7934C9ADC8CF8B759C417/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000C56</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">000C56</idno>
<idno type="wicri:Area/Istex/Curation">000B96</idno>
<idno type="wicri:Area/Istex/Checkpoint">000D13</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">000D13</idno>
<idno type="wicri:doubleKey">0923-5965:1997:Doenges P:mpeg:audio:video</idno>
<idno type="wicri:Area/Main/Merge">000F30</idno>
<idno type="wicri:Area/Main/Curation">000F28</idno>
<idno type="wicri:Area/Main/Exploration">000F28</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a">MPEG-4: Audio/video and synthetic graphics/audio for mixed media</title>
<author>
<name sortKey="Doenges, Peter K" sort="Doenges, Peter K" uniqKey="Doenges P" first="Peter K." last="Doenges">Peter K. Doenges</name>
<affiliation></affiliation>
<affiliation wicri:level="1">
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Evans &amp; Sutherland Computer Corp., 600 Komas Drive, P.O. Box 58700, Salt Lake City, UT 84158</wicri:regionArea>
<wicri:noRegion>UT 84158</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Capin, Tolga K" sort="Capin, Tolga K" uniqKey="Capin T" first="Tolga K." last="Capin">Tolga K. Capin</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Suisse</country>
<wicri:regionArea>Computer Graphics Lab (EPFL-LIG), Swiss Federal Institute of Technology, 1015 Lausanne</wicri:regionArea>
<placeName>
<settlement type="city">Lausanne</settlement>
<region nuts="3" type="region">Canton de Vaud</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Lavagetto, Fabio" sort="Lavagetto, Fabio" uniqKey="Lavagetto F" first="Fabio" last="Lavagetto">Fabio Lavagetto</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Italie</country>
<wicri:regionArea>DIST — Department of Telecommunications, Computer and System Sciences, University of Genoa, Via Opera Pia 13, 16145 Genova</wicri:regionArea>
<wicri:noRegion>16145 Genova</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Ostermann, Joern" sort="Ostermann, Joern" uniqKey="Ostermann J" first="Joern" last="Ostermann">Joern Ostermann</name>
<affiliation wicri:level="1">
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>AT&amp;T Research Laboratories, Room HO 4E518, 101 Crawfords Corner Road, Holmdel, NJ 07733-3030</wicri:regionArea>
<wicri:noRegion>NJ 07733-3030</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Pandzic, Igor S" sort="Pandzic, Igor S" uniqKey="Pandzic I" first="Igor S." last="Pandzic">Igor S. Pandzic</name>
<affiliation wicri:level="4">
<country xml:lang="fr">Suisse</country>
<wicri:regionArea>MIRALab — CUI, University of Geneva, 24 rue du Général-Dufour, CH1211 Geneva 4</wicri:regionArea>
<orgName type="university">Université de Genève</orgName>
<placeName>
<settlement type="city">Genève</settlement>
<region nuts="3" type="region">Canton de Genève</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Petajan, Eric D" sort="Petajan, Eric D" uniqKey="Petajan E" first="Eric D." last="Petajan">Eric D. Petajan</name>
<affiliation wicri:level="1">
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Bell Laboratories, Lucent Technologies, Room 2B-231, 600 Mountain Avenue, Murray Hill, NJ 07974</wicri:regionArea>
<wicri:noRegion>NJ 07974</wicri:noRegion>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Signal Processing: Image Communication</title>
<title level="j" type="abbrev">IMAGE</title>
<idno type="ISSN">0923-5965</idno>
<imprint>
<publisher>ELSEVIER</publisher>
<date type="published" when="1997">1997</date>
<biblScope unit="volume">9</biblScope>
<biblScope unit="issue">4</biblScope>
<biblScope unit="page" from="433">433</biblScope>
<biblScope unit="page" to="463">463</biblScope>
</imprint>
<idno type="ISSN">0923-5965</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0923-5965</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="Teeft" xml:lang="en">
<term>Acoustic</term>
<term>Acoustic speech</term>
<term>Acoustic speech information</term>
<term>Activex animation</term>
<term>Algorithm</term>
<term>Anchor applications</term>
<term>Animate</term>
<term>Animated</term>
<term>Animated texture</term>
<term>Animation</term>
<term>Animation parameters</term>
<term>Application developers</term>
<term>Articulation</term>
<term>Articulation parameters</term>
<term>Artificial textures</term>
<term>Audio</term>
<term>Audio objects</term>
<term>August</term>
<term>Aural environments</term>
<term>Automatic lipreading</term>
<term>Bandwidth</term>
<term>Basic geometry</term>
<term>Behavior capabilities</term>
<term>Behavior programming</term>
<term>Bell laboratories</term>
<term>Bell labs</term>
<term>Binary format</term>
<term>Block diagram</term>
<term>Body animation</term>
<term>Body definition</term>
<term>Body modeling</term>
<term>Body models</term>
<term>Broad range</term>
<term>Cache</term>
<term>Chicago mpeg meeting</term>
<term>Client terminals</term>
<term>Coding</term>
<term>Coding efficiency</term>
<term>Collaborative</term>
<term>Collaborative work</term>
<term>Communication applications</term>
<term>Compression</term>
<term>Compression ratios</term>
<term>Computer graphics</term>
<term>Computer graphics models</term>
<term>Computer graphics proc</term>
<term>Computer vision</term>
<term>Content production</term>
<term>Core experiment</term>
<term>Core experiments</term>
<term>Decoder</term>
<term>Decoder algorithms</term>
<term>Deterministic rates</term>
<term>Different levels</term>
<term>Display space</term>
<term>Doenges</term>
<term>Download</term>
<term>Downloaded</term>
<term>Downloaded data</term>
<term>Downloaded object</term>
<term>Downloaded objects</term>
<term>Downloading</term>
<term>Downloads</term>
<term>Dynamic overlay graphics</term>
<term>Efficient coding</term>
<term>Electrical engineering</term>
<term>Evans sutherland computer corp</term>
<term>Face analysis</term>
<term>Face animation</term>
<term>Face features</term>
<term>Facial</term>
<term>Facial agent</term>
<term>Facial animation</term>
<term>Facial expressions</term>
<term>Facial features</term>
<term>Facial model</term>
<term>Frame rates</term>
<term>Functionality</term>
<term>Gaming</term>
<term>Generic snhc object coding</term>
<term>Geometry</term>
<term>Geometry compression</term>
<term>Geometry textures</term>
<term>Graphical objects</term>
<term>Graphical representation</term>
<term>Graphics</term>
<term>Graphics models</term>
<term>Graphics overlay</term>
<term>Head position</term>
<term>Head tilt</term>
<term>Hstts coding</term>
<term>Hybrid</term>
<term>Hybrid coding</term>
<term>Hybrid content</term>
<term>Hybrid media</term>
<term>Ieee</term>
<term>Image communication</term>
<term>Image processing</term>
<term>Image quality</term>
<term>Information graphics</term>
<term>Interactive</term>
<term>Interactivity</term>
<term>Interface</term>
<term>Java media</term>
<term>Lipreading</term>
<term>Local disk</term>
<term>Lucent technologies</term>
<term>Media complexity</term>
<term>Media experiences</term>
<term>Media integration</term>
<term>Media processors</term>
<term>Media types</term>
<term>Model data</term>
<term>Modeling</term>
<term>Mpeg</term>
<term>Mpeg4</term>
<term>Mpeg4 receiver</term>
<term>Mpeg4 video objects</term>
<term>Msdl</term>
<term>Multimedia</term>
<term>Multimedia content</term>
<term>Multiple users</term>
<term>Murray hill</term>
<term>Natural content</term>
<term>Negotiation phase</term>
<term>Networked</term>
<term>Networking</term>
<term>Neural networks</term>
<term>Noisy environments</term>
<term>Nostril</term>
<term>Object behaviors</term>
<term>Object surface</term>
<term>Object types</term>
<term>Other hand</term>
<term>Other issues</term>
<term>Other users</term>
<term>Overlay</term>
<term>Overlay graphics</term>
<term>Parameter</term>
<term>Parameter streams</term>
<term>Plenoptic modeling</term>
<term>Point numbers</term>
<term>Polygon</term>
<term>Polygon mesh</term>
<term>Proc</term>
<term>Progressive transmission</term>
<term>Prosodic parameters</term>
<term>Real time</term>
<term>Remote control</term>
<term>Research assistant</term>
<term>Research interests</term>
<term>Salt lake city</term>
<term>Same manner</term>
<term>Scalability</term>
<term>Scalable</term>
<term>Scene composition</term>
<term>Scene content</term>
<term>Schematic representation</term>
<term>Second level</term>
<term>Server</term>
<term>Several papers</term>
<term>Sevilla mpeg meeting</term>
<term>Signal processing</term>
<term>Similar fashion</term>
<term>Simulation</term>
<term>Skeleton layer</term>
<term>Snhc</term>
<term>Snhc group</term>
<term>Snhc verification model</term>
<term>Special effects</term>
<term>Specific content</term>
<term>Specific session</term>
<term>Speech articulation</term>
<term>Speech coding</term>
<term>Speech parameters</term>
<term>Speech recognition</term>
<term>Speech synthesis</term>
<term>Stable configurations</term>
<term>Standardization</term>
<term>Standardized parameters</term>
<term>Subsequent steps</term>
<term>Support transmission</term>
<term>Surface properties</term>
<term>Synchronization</term>
<term>Synthesis units</term>
<term>Synthetic</term>
<term>Synthetic content</term>
<term>Synthetic environments</term>
<term>Synthetic graphics</term>
<term>Synthetic imagery</term>
<term>Synthetic model</term>
<term>Synthetic models</term>
<term>Synthetic objects</term>
<term>System model</term>
<term>System sciences</term>
<term>Temporal</term>
<term>Temporal coherence</term>
<term>Temporal composition</term>
<term>Temporal scalability</term>
<term>Terminal</term>
<term>Terminal resources</term>
<term>Texture</term>
<term>Texture compression</term>
<term>Texture maps</term>
<term>Therapy device</term>
<term>Third level</term>
<term>Time coding</term>
<term>Time synchronization</term>
<term>User</term>
<term>User interaction</term>
<term>User interface</term>
<term>Various media objects</term>
<term>Vertex positions</term>
<term>Video</term>
<term>Video coding</term>
<term>Video frame</term>
<term>Video objects</term>
<term>Video processing</term>
<term>Video scalability</term>
<term>Video streams</term>
<term>Virtual</term>
<term>Virtual body</term>
<term>Virtual environment</term>
<term>Virtual environments</term>
<term>Virtual humans</term>
<term>Virtual travel agency</term>
<term>Visual analysis</term>
<term>Visual information</term>
<term>Visual speech</term>
<term>Visual speech information</term>
<term>Vocal tract</term>
<term>Vrml</term>
<term>Wide variety</term>
</keywords>
</textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">MPEG-4 addresses coding of digital hybrids of natural and synthetic, aural and visual (A/V) information. The objective of this synthetic/natural hybrid coding (SNHC) is to facilitate content-based manipulation, interoperability, and wider user access in the delivery of animated mixed media. SNHC will support non-real-time and passive media delivery, as well as more interactive, real-time applications. Integrated spatial-temporal coding is sought for audio, video, and 2D/3D computer graphics as standardized A/V objects. Targets of standardization include mesh-segmented video coding, compression of geometry, synchronization between A/V objects, multiplexing of streamed A/V objects, and spatial-temporal integration of mixed media types. Composition, interactivity, and scripting of A/V objects can thus be supported in client terminals, as well as in content production for servers, also more effectively enabling terminals as servers. Such A/V objects can exhibit high efficiency in transmission and storage, plus content-based interactivity, spatial-temporal scalability, and combinations of transient dynamic data and persistent downloaded data. This approach can lower bandwidth of mixed media, offer tradeoffs in quality versus update for specific terminals, and foster varied distribution methods for content that exploit spatial and temporal coherence over buses and networks. MPEG-4 responds to trends at home and work to move beyond the paradigm of audio/video as a passive experience to more flexible A/V objects which combine audio/video with synthetic 2D/3D graphics and audio.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>Italie</li>
<li>Suisse</li>
<li>États-Unis</li>
</country>
<region>
<li>Canton de Genève</li>
<li>Canton de Vaud</li>
</region>
<settlement>
<li>Genève</li>
<li>Lausanne</li>
</settlement>
<orgName>
<li>Université de Genève</li>
</orgName>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Doenges, Peter K" sort="Doenges, Peter K" uniqKey="Doenges P" first="Peter K." last="Doenges">Peter K. Doenges</name>
</noRegion>
<name sortKey="Ostermann, Joern" sort="Ostermann, Joern" uniqKey="Ostermann J" first="Joern" last="Ostermann">Joern Ostermann</name>
<name sortKey="Petajan, Eric D" sort="Petajan, Eric D" uniqKey="Petajan E" first="Eric D." last="Petajan">Eric D. Petajan</name>
</country>
<country name="Suisse">
<region name="Canton de Vaud">
<name sortKey="Capin, Tolga K" sort="Capin, Tolga K" uniqKey="Capin T" first="Tolga K." last="Capin">Tolga K. Capin</name>
</region>
<name sortKey="Pandzic, Igor S" sort="Pandzic, Igor S" uniqKey="Pandzic I" first="Igor S." last="Pandzic">Igor S. Pandzic</name>
</country>
<country name="Italie">
<noRegion>
<name sortKey="Lavagetto, Fabio" sort="Lavagetto, Fabio" uniqKey="Lavagetto F" first="Fabio" last="Lavagetto">Fabio Lavagetto</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000F28 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000F28 | SxmlIndent | more
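
As an illustrative sketch, the two commands above can be wrapped in a small shell script so that the record key becomes a parameter. Only the elements already shown on this page ($WICRI_ROOT, the -h and -nk options of HfdSelect, SxmlIndent, and the key 000F28) come from the source; the script name and the default key are assumptions.

#!/bin/sh
# show-record.sh (hypothetical name): pretty-print one record of the exploration area
# Usage: ./show-record.sh [record-key]   (defaults to 000F28, the record on this page)
KEY=${1:-000F28}
EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Main/Exploration
HfdSelect -h "$EXPLOR_STEP/biblio.hfd" -nk "$KEY" | SxmlIndent | more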

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     ISTEX:77CDD5A141A63B1374B7934C9ADC8CF8B759C417
   |texte=   MPEG-4: Audio/video and synthetic graphics/audio for mixed media
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024